Search results for "Chain rule"

Showing 9 of 9 documents

The approximate subdifferential of composite functions

1993

This paper deals with the approximate subdifferential chain rule in a Banach space. It establishes specific results when the real-valued function is locally Lipschitzian and the mapping is strongly compactly Lipschitzian.

Mathematics::Functional Analysis; Computer Science::Systems and Control; General Mathematics; Mathematical analysis; Composite number; Mathematics::Optimization and Control; Banach space; Applied mathematics; Function (mathematics); Subderivative; Chain rule; Mathematics; Bulletin of the Australian Mathematical Society

An Itô Formula for rough partial differential equations and some applications

2020

Abstract: We investigate existence, uniqueness and regularity for solutions of rough parabolic equations of the form $\partial_t u - A_t u - f = (\dot X_t(x) \cdot \nabla + \dot Y_t(x))u$ on $[0,T] \times \mathbb{R}^{d}$. To do so, we introduce a concept of “differential rough driver”, which comes with a counterpart of the usual controlled-path spaces in rough paths theory, built on the Sobolev spaces $W^{k,p}$. We also define a natural notion of geometricity in this context, and show how it relates to a product formula for controlled paths. In the case of transport noise (i.e. when $Y = 0$), we use this framework to prove a…

Sobolev space; Pure mathematics; Partial differential equation; Maximum principle; Product (mathematics); Uniqueness; Chain rule; ddc:510; Parabolic partial differential equation; VDP::Matematikk og Naturvitenskap: 400::Matematikk: 410; Analysis; Domain (mathematical analysis); Mathematics

The Itô Integral

2014

The Itô integral allows us to integrate stochastic processes with respect to the increments of a Brownian motion or of a somewhat more general stochastic process. We develop the Itô integral first for Brownian motion and then for generalized diffusion processes (so-called Itô processes). In the third section, we derive the celebrated Itô formula. This is the chain rule for the Itô integral that enables us to do explicit calculations with it. In the fourth section, we use the Itô formula to obtain a stochastic solution of the classical Dirichlet problem. This in turn is used in the fifth section to show that, like the symmetric simple random walk, Brownian motion is recurrent …
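As a sketch of the rule in question (the standard statement for $f \in C^2$ and a Brownian motion $B$, not quoted from the chapter itself):

```latex
f(B_t) = f(B_0) + \int_0^t f'(B_s)\, \mathrm{d}B_s
       + \frac{1}{2} \int_0^t f''(B_s)\, \mathrm{d}s .
```

The extra $\tfrac{1}{2}\int f''$ term, absent from the classical chain rule, reflects the quadratic variation $\mathrm{d}\langle B \rangle_t = \mathrm{d}t$ of Brownian motion.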

Stratonovich integral; Dirichlet problem; Section (fiber bundle); Mathematics::Probability; Stochastic process; Mathematical analysis; Local martingale; Chain rule; Diffusion (business); Brownian motion; Mathematics

Application of kolmogorov complexity to inductive inference with limited memory

1995

Abstract. We consider inductive inference with limited memory [1]. We show that there exists a set U of total recursive functions such that U can be learned with linear long-term memory (and no short-term memory); U can be learned with logarithmic long-term memory (and some amount of short-term memory); and if U is learned with sublinear long-term memory, then the short-term memory exceeds any recursive function. This solves an open problem posed by Freivalds, Kinber and Smith [1]. To prove our result, we use Kolmogorov complexity.

Discrete mathematics; Hardware_MEMORYSTRUCTURES; Kolmogorov complexity; Logarithm; Sublinear function; Kolmogorov structure function; Chain rule for Kolmogorov complexity; Open problem; Inductive probability; Inductive reasoning; Mathematics

Response models for mixed binary and quantitative variables

1992

SUMMARY: A number of special representations are considered for the joint distribution of qualitative, mostly binary, and quantitative variables. In addition to the conditional Gaussian models and to conditional Gaussian regression chain models some emphasis is placed on models derived from an underlying multivariate normal distribution and on models in which discrete probabilities are specified linearly in terms of unknown parameters. The possibilities for choosing between the models empirically are examined, as well as the testing of independence and conditional independence and the estimation of parameters. Often the testing of independence is exactly or nearly the same for a number of di…

Statistics and Probability; Chain rule (probability); Applied Mathematics; General Mathematics; Multivariate normal distribution; Conditional probability distribution; Agricultural and Biological Sciences (miscellaneous); Discriminative model; Conditional independence; Joint probability distribution; Statistics; Statistics, Probability and Uncertainty; General Agricultural and Biological Sciences; Conditional variance; Independence (probability theory); Mathematics; Biometrika

Effects of Kolmogorov complexity present in inductive inference as well

1997

For all complexity measures in Kolmogorov complexity, the effect discovered by P. Martin-Löf holds: for every infinite binary sequence there is a wide gap between the supremum and the infimum of the complexity of initial fragments of the sequence. It is assumed that this inevitable gap is characteristic of Kolmogorov complexity and is caused by the highly abstract nature of the unrestricted Kolmogorov complexity.
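For context, the chain rule for Kolmogorov complexity referenced in the subject terms is the standard symmetry-of-information identity (standard background, not a claim from this paper):

```latex
K(x, y) = K(x) + K(y \mid x) + O\bigl(\log K(x, y)\bigr) .
```

The logarithmic error term is unavoidable in the unrestricted (plain) setting, which is one manifestation of the abstract nature of the measure discussed above.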

PH; Average-case complexity; Discrete mathematics; Structural complexity theory; Kolmogorov complexity; Kolmogorov structure function; Chain rule for Kolmogorov complexity; Descriptive complexity theory; Mathematics; Quantum complexity theory

Conditional Versus Joint Probability Assessments

1984

Abstract: The assessment of conditional and/or joint probabilities of events that constitute scenarios is necessary for sound planning, forecasting, and decision making. The assessment process is complex and subtle, and various difficulties are encountered in the elicitation of such probabilities, such as implicit violations of the probability calculus and of some meaningfulness conditions. The necessary and sufficient conditions, as well as the meaningfulness conditions, that the elicited information on conditional and joint probabilities must satisfy are evaluated empirically against actual assessments. A high frequency of violation of these conditions was observed in assessing both conditional and joint pro…
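The probability-calculus constraints whose violation is at issue can be made explicit; as a sketch (a standard consistency check, not the paper's own conditions), elicited joint and conditional assessments must satisfy the chain rule and its implied bounds:

```latex
P(A \cap B) = P(A \mid B)\, P(B),
\qquad
0 \le P(A \cap B) \le \min\{P(A),\, P(B)\} .
```

An assessor who reports, say, $P(A \mid B) = 0.9$, $P(B) = 0.5$, and $P(A \cap B) = 0.6$ has implicitly violated the first identity.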

021103 operations research; Chain rule (probability); Process (engineering); Posterior probability; 0211 other engineering and technologies; 02 engineering and technology; Computer Science Applications; Joint probability distribution; Consistency (statistics); Signal Processing; Statistics; 0202 electrical engineering electronic engineering information engineering; Econometrics; Probability calculus; 020201 artificial intelligence & image processing; Information Systems; Mathematics; INFOR: Information Systems and Operational Research

The Recycling Gibbs sampler for efficient learning

2018

Monte Carlo methods are essential tools for Bayesian inference. Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning, and statistics to draw samples from complicated high-dimensional posterior distributions. The key point for the successful application of the Gibbs sampler is the ability to draw samples efficiently from the full-conditional probability density functions. Since this is not possible in the general case, auxiliary samples must be generated to speed up the convergence of the chain, and their information is eventually disregarded. In this work, we show that these auxiliary sample…
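The mechanism described above — cycling through exact draws from each full conditional — can be sketched in a few lines. This is a minimal illustration on a toy target (a standard bivariate normal with correlation `rho`, where both full conditionals are Gaussian and exactly samplable), not the paper's Recycling Gibbs scheme:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is Gaussian, X | Y=y ~ N(rho*y, 1 - rho^2)
    (and symmetrically for Y | X=x), so exact conditional draws are possible.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    x, y = 0.0, 0.0                  # arbitrary starting state
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)   # draw from p(x | y)
        y = rng.gauss(rho * x, sd)   # draw from p(y | x)
        if i >= burn_in:             # discard burn-in iterations
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(s[0] for s in samples) / len(samples)          # should be near 0
corr = sum(s[0] * s[1] for s in samples) / len(samples)     # E[XY], near rho
```

When a full conditional cannot be sampled exactly, this is where auxiliary-variable schemes (and the samples the paper proposes to recycle) enter.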

FOS: Computer and information sciences; Monte Carlo method; Slice sampling; Inference; Machine Learning (stat.ML); 02 engineering and technology; Bayesian inference; Statistics - Computation; 01 natural sciences; Machine Learning (cs.LG); 010104 statistics & probability; [INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing; Statistics - Machine Learning; Artificial Intelligence; Statistics; 0202 electrical engineering electronic engineering information engineering; 0101 mathematics; Electrical and Electronic Engineering; Gaussian process; Computation (stat.CO); ComputingMilieux_MISCELLANEOUS; Mathematics; Chain rule (probability); Applied Mathematics; 020206 networking & telecommunications; Markov chain Monte Carlo; Statistics::Computation; Computer Science - Learning; Computational Theory and Mathematics; Signal Processing; Computer Vision and Pattern Recognition; Statistics, Probability and Uncertainty; Algorithm; [SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing; Gibbs sampling; Digital Signal Processing

Exponential inequalities and estimation of conditional probabilities

2006

This paper deals with the problems of typicality and conditional typicality of “empirical probabilities” for stochastic processes, and with the estimation of potential functions for Gibbs measures and dynamical systems. Questions of typicality have been studied in [FKT88] for independent sequences and in [BRY98, Ris89] for Markov chains. In order to prove the consistency of estimators of transition probabilities for Markov chains of unknown order, results on typicality and conditional typicality for some Ψ-mixing processes were obtained in [CsS, Csi02]. Unfortunately, many natural mixing processes do not satisfy this Ψ-mixing condition (see [DP05]). We consider a class of mixing processes inspired …
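The transition-probability estimators whose consistency is at issue are, in the simplest first-order case, empirical frequency counts. A minimal sketch (illustrative only, not the paper's estimator):

```python
from collections import Counter

def estimate_transition_matrix(seq, states):
    """Maximum-likelihood estimate of first-order Markov transition
    probabilities from a single observed trajectory."""
    pair_counts = Counter(zip(seq[:-1], seq[1:]))   # observed transitions
    state_counts = Counter(seq[:-1])                # visits that have a successor
    return {
        (a, b): pair_counts[(a, b)] / state_counts[a] if state_counts[a] else 0.0
        for a in states for b in states
    }

# Toy two-state trajectory; P[('a', 'b')] estimates P(next = 'b' | current = 'a').
P = estimate_transition_matrix("aabbababab", ("a", "b"))
```

Consistency of such count-based estimators as the trajectory length grows is exactly where the typicality results for mixing processes come into play.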

Discrete mathematics; Chain rule (probability); Mixing (mathematics); Markov chain; Statistics; Law of total probability; Conditional probability; Almost surely; Gibbs measure; Conditional variance; Mathematics